Computer says no – how AI is quietly killing good finance deals

There’s a lot of noise at the moment about how AI is improving lending – faster decisions, better risk assessment, less friction – and in some cases it’s true. But what people don’t talk about as much is what happens when a business doesn’t quite fit the model. Often, the computer simply says ‘no’.

Most finance applications today are heavily automated – you fill in an online form, upload your accounts and bank statements, and the system does the rest. If everything lines up neatly, you’ll probably get an answer quite quickly; if it doesn’t, brace yourself. In reality, most businesses don’t line up neatly, and many struggle with the online forms in the first place.

One thing that really sticks with us is how fragile applications become the moment they need explaining. Industry stats suggest that once an application gets kicked back for clarification or additional information, the success rate drops to around 11–12 per cent. That’s not because the business is suddenly riskier – it’s because the system doesn’t like ambiguity and treats it as risk.

We see this all the time. A business submits an application and everything looks broadly fine, but there’s one line in the accounts that raises a question, the cash flow doesn’t follow a textbook pattern, or there’s a timing issue with invoices. Perfectly normal stuff if you understand how businesses operate.

But to an automated system, it’s a red flag. What’s frustrating is that, to a human, the explanation is often obvious: seasonal trading, a one-off contract, or a delayed payment from a large customer. None of this is unusual to a human, but AI doesn’t do nuance – it does pattern matching. And once something is flagged, it’s very hard to unflag it.

Another issue we see creeping in is over-reliance on the output. Junior staff are being trained to trust what the system tells them rather than question it. AI can summarise accounts very well, pulling out ratios, trends and anomalies. But it can’t tell you whether those anomalies matter, and it can’t tell when something smells wrong.

There was an example that came up recently where, on paper, everything looked fine. The numbers stacked up and the application ticked all the boxes, but when you spoke to the borrower, the story didn’t quite make sense. A few gentle questions – nothing aggressive, just probing – and it became clear there were gaps. AI would have passed it straight through; a human didn’t.

On the other side of that, we regularly see good, well-run businesses rejected because their story is a bit messy – and most businesses are messy. Growth is messy, cash flow is messy, and real life rarely behaves like a spreadsheet. This is where experience still matters.

A good broker isn’t just there to submit applications. They know which lenders can cope with complexity, which ones will actually listen, and which ones will hide behind policy. They know how to frame a story properly so it doesn’t get derailed by something minor. More importantly, they act as a buffer between the business and the machine.

AI isn’t the enemy – used properly, it’s a useful tool. But when it becomes the decision-maker rather than the assistant, good businesses fall through the cracks. And that’s the real risk right now: not being declined, but being perfectly fundable and never getting the chance to make your case.
